
    The Strahler number of a parity game

    The Strahler number of a rooted tree is the largest height of a perfect binary tree that is its minor. The Strahler number of a parity game is proposed to be defined as the smallest Strahler number of the tree of any of its attractor decompositions. It is proved that parity games can be solved in quasi-linear space and in time that is polynomial in the number of vertices $n$ and linear in $(d/2k)^k$, where $d$ is the number of priorities and $k$ is the Strahler number. This complexity is quasi-polynomial because the Strahler number is at most logarithmic in the number of vertices. The proof is based on a new construction of small Strahler-universal trees. It is shown that the Strahler number of a parity game is a robust parameter: it coincides with its alternative version based on trees of progress measures and with the register number defined by Lehtinen (2018). It follows that parity games can be solved in quasi-linear space and in time that is polynomial in the number of vertices and linear in $(d/2k)^k$, where $k$ is the register number. This significantly improves the running times and space achieved for parity games of bounded register number by Lehtinen (2018) and by Parys (2020). The running time of the algorithm based on small Strahler-universal trees yields a novel trade-off $k \cdot \lg(d/k) = O(\log n)$ between the two natural parameters that measure the structural complexity of a parity game, which allows solving parity games in polynomial time. This includes as special cases the asymptotic settings of those parameters covered by the results of Calude, Jain, Khoussainov, Li, and Stephan (2017), of Jurdziński and Lazić (2017), and of Lehtinen (2018), and it significantly extends the range of such settings, for example to $d = 2^{O(\sqrt{\lg n})}$ and $k = O(\sqrt{\lg n})$.
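    A reader's note on the bounds quoted above (not part of the abstract): assuming $k \ge 1$ and that the number of priorities satisfies $d \le n$, the stated bounds yield quasi-polynomial time in general and polynomial time under the stated trade-off.

```latex
% Reader's sketch, using only the bounds quoted in the abstract and assuming
% k >= 1 and d <= n (the number of priorities is at most the number of vertices).
% Quasi-polynomial in general, since the Strahler number satisfies k = O(\lg n):
\[
  \left(\frac{d}{2k}\right)^{k} \;\le\; d^{\,k} \;=\; 2^{\,k \lg d}
  \;\le\; 2^{\,O(\lg n)\cdot \lg n} \;=\; n^{O(\lg n)}.
\]
% Polynomial under the trade-off stated above:
\[
  \left(\frac{d}{2k}\right)^{k} \;\le\; \left(\frac{d}{k}\right)^{k}
  \;=\; 2^{\,k \lg (d/k)} \;=\; 2^{\,O(\log n)} \;=\; n^{O(1)}
  \qquad\text{when } k \cdot \lg(d/k) = O(\log n).
\]
```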

    A Technique to Speed up Symmetric Attractor-Based Algorithms for Parity Games

    The classic McNaughton-Zielonka algorithm for solving parity games has excellent performance in practice, but its worst-case asymptotic complexity is worse than that of the state-of-the-art algorithms. This work pinpoints the mechanism responsible for this relative underperformance and proposes a new technique that eliminates it. The culprit is the wasteful manner in which the results obtained from recursive calls are indiscriminately discarded by the algorithm whenever the subgames on which it is run change. Our new technique first enhances the algorithm to compute attractor decompositions of subgames rather than just winning strategies on them, and then makes it carefully reuse attractor decompositions computed in earlier recursive calls to reduce the size of the subgames on which further recursive calls are made. We illustrate the new technique on the classic example of the recursive McNaughton-Zielonka algorithm, but it can be applied to other symmetric attractor-based algorithms inspired by it, such as the quasi-polynomial versions of the McNaughton-Zielonka algorithm based on universal trees.
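    For context (a reader's sketch, not the paper's contribution), the classic recursive McNaughton-Zielonka algorithm referred to above can be written as follows; the game representation used here is hypothetical, and the sketch deliberately shows only the plain recursion, without the paper's reuse of attractor decompositions.

```python
# Minimal sketch (reader's illustration, not the paper's contribution) of the
# classic recursive McNaughton-Zielonka algorithm. The representation is
# hypothetical: `vertices` is a set, owner[v] in {0, 1}, priority[v] >= 0,
# edges[v] lists successors, and every vertex is assumed to have a successor.

def attractor(vertices, edges, owner, target, player):
    """Vertices from which `player` can force a visit to `target` inside `vertices`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succs = [w for w in edges[v] if w in vertices]
            if owner[v] == player:
                can_enter = any(w in attr for w in succs)
            else:
                can_enter = bool(succs) and all(w in attr for w in succs)
            if can_enter:
                attr.add(v)
                changed = True
    return attr

def zielonka(vertices, edges, owner, priority):
    """Return (W0, W1), the winning regions of players 0 and 1."""
    if not vertices:
        return set(), set()
    d = max(priority[v] for v in vertices)
    player = d % 2                                  # player favoured by the top priority
    top = {v for v in vertices if priority[v] == d}
    a = attractor(vertices, edges, owner, top, player)
    w0, w1 = zielonka(vertices - a, edges, owner, priority)
    win_opponent = w1 if player == 0 else w0
    if not win_opponent:
        # The opponent wins nothing in the subgame, so `player` wins everywhere.
        return (vertices, set()) if player == 0 else (set(), vertices)
    # Otherwise recurse on a smaller game, discarding the first recursive call's
    # results here: this is the wasteful step that the paper's technique targets.
    b = attractor(vertices, edges, owner, win_opponent, 1 - player)
    w0, w1 = zielonka(vertices - b, edges, owner, priority)
    return (w0, w1 | b) if player == 0 else (w0 | b, w1)

# Tiny illustrative game: a single cycle whose highest priority is even,
# so player 0 wins from every vertex.
V = {0, 1, 2}
owner = {0: 0, 1: 1, 2: 0}
priority = {0: 2, 1: 1, 2: 0}
edges = {0: [1], 1: [2], 2: [0]}
print(zielonka(V, edges, owner, priority))          # ({0, 1, 2}, set())
```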

    Adaptive synchronisation of pushdown automata

    We introduce the notion of adaptive synchronisation for pushdown automata, in which there is an external observer who has no knowledge of the current state of the pushdown automaton but can observe the contents of the stack. The observer would then like to decide whether it is possible to bring the automaton from any state into some predetermined state by giving it inputs in an adaptive manner, i.e., the next input letter to be given can depend on how the contents of the stack changed after the current input letter. We show that this problem is 2-EXPTIME-complete for non-deterministic pushdown automata and EXPTIME-complete for deterministic pushdown automata. To prove the lower bounds, we first introduce (different variants of) subset-synchronisation and show that these problems are polynomial-time equivalent to the adaptive synchronisation problem. We then prove hardness results for the subset-synchronisation problems. For the upper bounds, we consider the problem of deciding whether a given alternating pushdown system has an accepting run with at most $k$ leaves, and we provide an $n^{O(k^2)}$-time algorithm for this problem.
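    To make the observation model above concrete, here is a toy sketch (not from the paper; the transition table and all names are hypothetical) for a deterministic pushdown automaton: the control state is hidden, the stack is visible, and after each input letter the observer keeps only the states consistent with the observed stack. An adaptive strategy chooses the next letter from this belief, and synchronisation is certified once the belief contains only the predetermined target state.

```python
# Toy model (hypothetical, for illustration only): a deterministic pushdown
# automaton whose control state is hidden from the observer but whose stack
# is fully visible. Transitions map (state, letter, stack_top) to
# (new_state, replacement), where `replacement` is pushed in place of the
# popped top symbol ('' pops, 'A' keeps it, 'AA' pushes another A).
delta = {
    ('p', 'a', 'Z'): ('q', 'AZ'),
    ('q', 'a', 'Z'): ('q', 'AZ'),
    ('p', 'a', 'A'): ('q', 'AA'),
    ('q', 'a', 'A'): ('q', 'AA'),
    ('p', 'b', 'A'): ('p', ''),
    ('q', 'b', 'A'): ('q', ''),
    ('p', 'b', 'Z'): ('p', 'Z'),
    ('q', 'b', 'Z'): ('p', 'Z'),
}

def step(state, stack, letter):
    """One move of the automaton, or None if no move is enabled."""
    move = delta.get((state, letter, stack[0])) if stack else None
    if move is None:
        return None
    new_state, replacement = move
    return new_state, replacement + stack[1:]

def refine(belief, stack, letter, observed_stack):
    """Control states still possible after sending `letter` and observing the new stack."""
    new_belief = set()
    for state in belief:
        result = step(state, stack, letter)
        if result is not None and result[1] == observed_stack:
            new_belief.add(result[0])
    return new_belief

# With visible stack 'Z' and no knowledge of the state, sending 'b' drives
# both 'p' and 'q' to 'p', so the belief collapses to the target state.
print(refine({'p', 'q'}, 'Z', 'b', 'Z'))            # {'p'}
```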

    Simple and tight complexity lower bounds for solving Rabin games

    We give a simple proof that, assuming the Exponential Time Hypothesis (ETH), determining the winner of a Rabin game cannot be done in time $2^{o(k \log k)} \cdot n^{O(1)}$, where $k$ is the number of pairs of vertex subsets involved in the winning condition and $n$ is the vertex count of the game graph. While this result follows from the lower bounds provided by Calude et al. [SIAM J. Comp. 2022], our reduction is simpler and arguably provides more insight into the complexity of the problem. In fact, the analogous lower bounds discussed by Calude et al., for solving Muller games and multidimensional parity games, follow as simple corollaries of our approach. Our reduction also highlights the usefulness of a certain pivot problem, Permutation SAT, which may be of independent interest.

    On history-deterministic one-counter nets


    Solving Two-Player Games under Progress Assumptions

    This paper considers the problem of solving infinite two-player games over finite graphs under various classes of progress assumptions motivated by applications in cyber-physical system (CPS) design. Formally, we consider a game graph $G$, a temporal specification $\Phi$ and a temporal assumption $\psi$, where both are given as linear temporal logic (LTL) formulas over the vertex set of $G$. We call the tuple $(G,\Phi,\psi)$ an 'augmented game' and interpret it in the classical way, i.e., winning the augmented game $(G,\Phi,\psi)$ is equivalent to winning the (standard) game $(G,\psi \implies \Phi)$. Given a reachability or parity game $(G,\Phi)$ and some progress assumption $\psi$, this paper establishes whether solving the augmented game $(G,\Phi,\psi)$ lies in the same complexity class as solving $(G,\Phi)$. While the answer to this question is negative for arbitrary combinations of $\Phi$ and $\psi$, a positive answer results in more efficient algorithms, in particular for large game graphs. We therefore restrict our attention to particular classes of CPS-motivated progress assumptions and establish the worst-case time complexity of the resulting augmented games. Thereby, we pave the way towards a better understanding of assumption classes that can enable the development of efficient solution algorithms in augmented two-player games.
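    As a reader's illustration of the equivalence stated above (not taken from the paper), one can instantiate the progress assumption with a hypothetical recurrence formula $\psi = \mathbf{GF}\,P$ over a set of vertices $P$:

```latex
% Reader's example (not from the paper): the stated equivalence instantiated
% with a hypothetical recurrence-style progress assumption \psi = G F P,
% i.e., "some vertex of the set P is visited infinitely often".
\[
  \text{winning } (G, \Phi, \psi)
  \;\iff\;
  \text{winning } \bigl(G,\; \mathbf{G}\mathbf{F}\,P \implies \Phi \bigr).
\]
```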

    Prognostic significance of primary tumour volume in nasopharyngeal carcinoma – a single institute study

    Introduction: Nasopharyngeal carcinoma is very uncommon in the southern part of India; the age-adjusted incidence rate is less than 1 per 100,000 population. This study was undertaken to evaluate the outcome of nasopharyngeal carcinoma and its correlation with primary tumour volume. Materials and methods: A total of 50 non-metastatic nasopharyngeal carcinoma patients treated with concurrent chemoradiation between January 2013 and December 2015 were included in the study. All patients were treated with IMRT to a dose of 66-70 Gy, along with concurrent chemotherapy. Initial tumour volume was measured from CT-based contouring and the mean dose delivered was calculated. All patients were followed up for survival, relapse and metastasis. Results: The median follow-up for the group was 24 months. The median gross tumour volume of the primary disease and of the nodal disease was 61.6 cubic centimetres and 35.4 cubic centimetres, respectively. The 2-year disease-free survival (DFS) and overall survival (OS) for the entire group were 64% and 68%, respectively. At 24 months, the DFS of the low-volume disease (LVD) group was significantly better than that of the high-volume disease (HVD) group (78% vs 52%, p = 0.018); similarly, OS was significantly better in the LVD group than in the HVD group (80% vs 55%, p = 0.015). Among the treatment-related factors, adjuvant chemotherapy significantly improved the outcome in the HVD group, but no difference was seen in the LVD group. Conclusion: Our patients had large-volume primary disease. OS and DFS were significantly better in LVD patients; adjuvant chemotherapy after concurrent chemoradiotherapy had no additional benefit for LVD patients but improved DFS and metastasis-free survival (MFS) in HVD patients.
